Diffusion Probabilistic Models (DPMs) have shown a powerful capacity for generating high-quality image samples. Recently, diffusion autoencoders (Diff-AE) have been proposed to explore DPMs for representation learning via autoencoding. Their key idea is to jointly train an encoder that discovers meaningful representations from images and a conditional DPM that serves as the decoder for reconstructing them. Since training DPMs from scratch takes a long time and numerous pre-trained DPMs already exist, we propose \textbf{P}re-trained \textbf{D}PM \textbf{A}uto\textbf{E}ncoding (\textbf{PDAE}), a general method that adapts existing pre-trained DPMs into decoders for image reconstruction, with better training efficiency and performance than Diff-AE. Specifically, we find that pre-trained DPMs fail to reconstruct an image from its latent variables because of the information lost in the forward process, which creates a gap between their predicted posterior mean and the true one. From this perspective, classifier-guided sampling can be explained as computing an extra mean shift that fills this gap and restores the lost class information in samples. This implies that the gap corresponds to the information lost from the image, and that we can reconstruct the image by filling it. Drawing inspiration from this, we employ a trainable model to predict a mean shift from an encoded representation and train it to fill as much of the gap as possible; in this way, the encoder is forced to extract as much information as possible from images to aid the filling. By reusing part of the network of pre-trained DPMs and redesigning the weighting scheme of the diffusion loss, PDAE learns meaningful representations from images efficiently. Extensive experiments demonstrate the effectiveness, efficiency, and flexibility of PDAE.
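To make the gap-filling idea concrete, below is a minimal PyTorch-style sketch of how a frozen pre-trained DPM could be combined with a trainable encoder and mean-shift predictor; the module names (`Encoder`, `ShiftPredictor`), the toy conditioning, and the unweighted MSE objective are illustrative assumptions rather than the paper's actual implementation.

```python
# Hypothetical sketch of PDAE-style training: a frozen pre-trained DPM predicts noise,
# while a trainable encoder and shift predictor learn to "fill the gap" in the posterior mean.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, z_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, z_dim),
        )

    def forward(self, x0):
        return self.net(x0)                         # (B, z_dim) semantic representation

class ShiftPredictor(nn.Module):
    """Predicts a mean shift conditioned on the representation z and the timestep t."""
    def __init__(self, z_dim=512, channels=3):
        super().__init__()
        self.cond = nn.Linear(z_dim + 1, channels)  # toy conditioning; the paper reuses the DPM U-Net

    def forward(self, x_t, z, t):
        scale = self.cond(torch.cat([z, t.float().unsqueeze(1)], dim=1))
        return x_t * scale[:, :, None, None]        # placeholder shift with the right shape

def pdae_training_step(x0, t, frozen_dpm, encoder, shift_predictor, alphas_bar):
    """One training step: the frozen DPM's noise prediction is corrected by the learned
    shift so that the corrected prediction matches the true noise (gap filling)."""
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t][:, None, None, None]
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise    # forward diffusion q(x_t | x_0)
    with torch.no_grad():
        eps_pre = frozen_dpm(x_t, t)                        # frozen pre-trained prediction
    z = encoder(x0)
    shift = shift_predictor(x_t, z, t)                      # fills the posterior-mean gap
    return ((noise - (eps_pre + shift)) ** 2).mean()        # re-weighting scheme omitted here
```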
When facing changing environments in the real world, the lightweight model on client devices suffers severe performance drops under distribution shifts. The main limitations of existing device models are that (1) they cannot be updated because of the device's limited compute, and (2) the lightweight model has limited generalization ability. Meanwhile, recent large models have shown strong generalization capability on the cloud, yet they cannot be deployed on client devices because of their heavy computation requirements. To enable the device model to cope with changing environments, we propose a new learning paradigm of Cloud-Device Collaborative Continual Adaptation, which encourages collaboration between cloud and device and improves the generalization of the device model. Based on this paradigm, we further propose an Uncertainty-based Visual Prompt Adapted (U-VPA) teacher-student model to transfer the generalization capability of the large model on the cloud to the device model. Specifically, we first design Uncertainty Guided Sampling (UGS) to continuously screen out challenging data and transmit the most out-of-distribution samples from the device to the cloud. Then we propose a Visual Prompt Learning Strategy with Uncertainty-guided updating (VPLU) to specifically handle the selected samples that exhibit larger distribution shifts. We transmit the visual prompts to the device and concatenate them with the incoming data to pull the device's test distribution closer to the cloud's training distribution. We conduct extensive experiments on two object detection datasets with continually changing environments. Our proposed U-VPA teacher-student framework outperforms previous state-of-the-art test-time adaptation and device-cloud collaboration methods. The code and datasets will be released.
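As a rough illustration of the Uncertainty Guided Sampling step, the sketch below scores incoming samples by predictive entropy and selects the most uncertain ones for upload; the entropy criterion, top-k budget, and `device_model` interface are assumptions, since the paper's exact UGS rule is not spelled out here.

```python
# Minimal sketch (not the paper's exact UGS criterion) of uncertainty-guided sample
# selection on the device: score each sample by predictive entropy and ship only the
# most out-of-distribution ones to the cloud.
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_uncertain_samples(device_model, batch, k=8):
    """Return the k samples with the highest predictive entropy."""
    logits = device_model(batch)                                       # (B, num_classes)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)      # (B,)
    top = entropy.topk(min(k, batch.size(0))).indices
    return batch[top], entropy[top]                                    # candidates to upload
```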
This paper describes the DMA submission to the TempoWiC task, which achieves a macro-F1 score of 77.05% and attains first place in this task. We first explore the impact of different pre-trained language models. We then adopt data cleaning, data augmentation, and adversarial training strategies to enhance the model's generalization and robustness. For further improvement, we integrate POS information and word semantic representations using a Mixture-of-Experts (MoE) approach. The experimental results show that MoE can overcome the feature-overuse issue and combine the context, POS, and word semantic features well. Additionally, we use a model ensemble method for the final prediction, which has been shown effective in many prior works.
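The sketch below shows one way such a Mixture-of-Experts fusion of contextual, POS, and word-semantic features could look; the per-feature linear experts and softmax gate are illustrative assumptions, not the submission's actual architecture.

```python
# Hedged sketch of an MoE layer that fuses contextual, POS, and word-sense features.
import torch
import torch.nn as nn

class FeatureMoE(nn.Module):
    def __init__(self, ctx_dim, pos_dim, sem_dim, hidden=256):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Linear(ctx_dim, hidden),   # contextual embedding expert
            nn.Linear(pos_dim, hidden),   # POS feature expert
            nn.Linear(sem_dim, hidden),   # word semantic representation expert
        ])
        self.gate = nn.Linear(ctx_dim + pos_dim + sem_dim, 3)

    def forward(self, ctx, pos, sem):
        outs = torch.stack([e(x) for e, x in zip(self.experts, (ctx, pos, sem))], dim=1)
        weights = torch.softmax(self.gate(torch.cat([ctx, pos, sem], dim=-1)), dim=-1)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)   # (B, hidden) fused feature
```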
This paper presents SaleNet, an end-to-end convolutional neural network (CNN) for sustained attention level evaluation using prefrontal electroencephalography (EEG). A bias-driven pruning method is proposed, together with group convolution, global average pooling (GAP), near-zero pruning, weight clustering, and quantization for model compression, achieving a total compression ratio of 183.11x. The compressed SaleNet obtains a state-of-the-art subject-independent sustained attention classification accuracy of 84.2% on the 6-subject EEG database recorded in this work. SaleNet is implemented on an Artix-7 FPGA with a competitive power consumption of 0.11 W and an energy efficiency of 8.19 GOPS/W.
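For intuition about how such compression ratios arise, here is a toy Python illustration of two of the listed steps, near-zero pruning and uniform weight quantization, with a naive ratio estimate; the threshold, bit width, and ratio calculation are made up for the example and ignore index and codebook overhead.

```python
# Toy illustration (not the paper's pipeline) of near-zero pruning and uniform quantization.
import numpy as np

def near_zero_prune(w, threshold=1e-2):
    """Zero out weights whose magnitude falls below the threshold."""
    mask = np.abs(w) >= threshold
    return w * mask, mask

def quantize(w, bits=4):
    """Uniformly quantize weights to 2**bits symmetric levels."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(w / scale) * scale

w = np.random.randn(64, 128).astype(np.float32)
pruned, mask = near_zero_prune(w)
quantized = quantize(pruned, bits=4)

# Rough ratio: 32-bit dense weights vs. 4-bit codes for surviving weights only,
# so this overstates what a real deployment would achieve.
ratio = (w.size * 32) / (mask.sum() * 4)
print(f"kept {mask.mean():.1%} of weights, naive compression ratio ~{ratio:.1f}x")
```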
In natural language processing, the widespread use of black-box models poses great challenges to understanding the basis of decisions, trusting prediction results, and improving model performance. The words in a text sample have attributes that reflect their semantic and contextual information, such as part of speech and position. These attributes may have a certain relationship with word saliency, which helps in studying the interpretability of model predictions. In this paper, we explore the relationship between word saliency and word attributes. Based on the analysis results, we further build Seq2Saliency, a mapping model from the words in a text sample and their attributes to saliency values, based on the concept of sequence labeling. In addition, we construct a new dataset named PRSALM, which contains every word in the text samples together with its attributes and saliency value. Experimental evaluations are conducted to analyze the saliency of words with different attributes, and the effectiveness of the Seq2Saliency model is verified.
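A schematic sketch of the sequence-labeling formulation is given below: token and attribute embeddings are encoded and regressed to per-token saliency scores. The BiLSTM encoder, embedding sizes, and regression head are assumptions for illustration, not the Seq2Saliency architecture itself.

```python
# Illustrative sketch: map tokens plus their attributes (e.g. POS) to saliency scores.
import torch
import torch.nn as nn

class SaliencyTagger(nn.Module):
    def __init__(self, vocab_size, num_pos_tags, emb=128, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb)
        self.pos_emb = nn.Embedding(num_pos_tags, emb)
        self.encoder = nn.LSTM(2 * emb, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # one saliency value per token

    def forward(self, word_ids, pos_ids):
        x = torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids)], dim=-1)
        h, _ = self.encoder(x)
        return self.head(h).squeeze(-1)        # (B, seq_len) saliency scores
```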
Federated learning (FL) has recently emerged as a popular privacy-preserving collaborative learning paradigm. However, it suffers from non-independent and identically distributed (non-IID) data among clients. In this paper, we propose a novel framework, called Synthetic Data Aided Federated Learning (SDA-FL), to address this non-IID challenge by sharing synthetic data. Specifically, each client pretrains a local generative adversarial network (GAN) to generate differentially private synthetic data, which are uploaded to the parameter server (PS) to construct a globally shared synthetic dataset. To generate confident pseudo labels for the synthetic dataset, we also propose an iterative pseudo labeling mechanism performed by the PS. The combination of the local private datasets and the synthetic dataset with confident pseudo labels leads to nearly identical data distributions among clients, which improves the consistency among local models and benefits the global aggregation. Extensive experiments demonstrate that the proposed framework outperforms the baseline methods by a large margin on several benchmark datasets under both supervised and semi-supervised settings.
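The sketch below illustrates the server-side confident pseudo-labeling idea in a few lines: predictions from the available models are averaged over the shared synthetic samples and only high-confidence labels are kept. The 0.9 threshold and simple averaging rule are assumptions, not the paper's exact mechanism.

```python
# Minimal sketch of confident pseudo-labeling for a shared synthetic dataset.
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label(synthetic_x, models, threshold=0.9):
    """Average model predictions and keep only synthetic samples labeled confidently."""
    probs = torch.stack([F.softmax(m(synthetic_x), dim=-1) for m in models]).mean(0)
    conf, labels = probs.max(dim=-1)
    keep = conf >= threshold
    return synthetic_x[keep], labels[keep]     # confidently labeled synthetic data
```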
Language models demonstrate both quantitative improvements and new qualitative capabilities as their scale increases. Despite their potentially transformative impact, these new capabilities are poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors from 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Twitter bot detection has become an increasingly important task for combating misinformation, facilitating social media moderation, and preserving the integrity of online discourse. State-of-the-art bot detection methods generally leverage the graph structure of the Twitter network, and they show promising performance when confronting novel Twitter bots that traditional methods fail to detect. However, few of the existing Twitter bot detection datasets are graph-based, and even these graph-based datasets suffer from limited dataset scale, incomplete graph structure, and low annotation quality. In fact, the lack of a large-scale graph-based Twitter bot detection benchmark that addresses these issues has seriously hindered the development and evaluation of graph-based bot detection methods. In this paper, we propose TwiBot-22, a comprehensive graph-based Twitter bot detection benchmark that presents the largest dataset to date, provides diversified entities and relations on the Twitter network, and has considerably better annotation quality than existing datasets. In addition, we re-implement 35 representative Twitter bot detection baselines and evaluate them on 9 datasets, including TwiBot-22, to promote a fair comparison of model performance and a holistic understanding of research progress. To facilitate further research, we consolidate all implemented codes and datasets into the TwiBot-22 evaluation framework, where researchers can consistently evaluate new models and datasets. The TwiBot-22 Twitter bot detection benchmark and evaluation framework are publicly available at https://twibot22.github.io/.
Data augmentation is an important component of robustness evaluation for natural language processing (NLP) models and of enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available in the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
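To give a feel for the two extension points, the sketch below shows a toy transformation and a toy filter written against hypothetical interfaces; the actual NL-Augmenter class hierarchy and method signatures should be taken from the repository linked above.

```python
# Schematic examples of a transformation (modifies text) and a filter (selects text),
# written as standalone toy classes rather than against the real NL-Augmenter base classes.
import random
from typing import List

class ButterFingersTransformation:
    """Toy transformation: randomly swap adjacent characters to simulate typos."""
    def generate(self, sentence: str, prob: float = 0.05) -> List[str]:
        chars = list(sentence)
        for i in range(len(chars) - 1):
            if random.random() < prob:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return ["".join(chars)]

class LengthFilter:
    """Toy filter: keep only sentences within a word-count range."""
    def filter(self, sentence: str, min_words: int = 3, max_words: int = 40) -> bool:
        return min_words <= len(sentence.split()) <= max_words
```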
Recently, Truncated Quantile Critics (TQC), which uses a distributional representation of critics, was shown to provide state-of-the-art asymptotic training performance on all environments of the MuJoCo continuous control benchmark suite. In addition, Randomized Ensembled Double Q-Learning (REDQ), which uses a high update-to-data ratio and target randomization, achieves sample efficiency competitive with state-of-the-art model-based methods. In this paper, we propose a new model-free algorithm, Aggressive Q-Learning with Ensembles (AQE), which improves the sample-efficiency performance of REDQ and the asymptotic performance of TQC, thereby providing overall state-of-the-art performance during all stages of training. Moreover, AQE is very simple, requiring neither a distributional representation of critics nor target randomization.
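For context, the sketch below shows one common ensemble-critic target construction, averaging the lowest k of n Q-estimates to curb overestimation; it is in the spirit of REDQ/TQC-style ensembles, but the exact target rule, ensemble size, and keep parameter used by AQE are not reproduced here.

```python
# Illustrative ensemble Q-learning target: average the k smallest of n critic estimates.
import torch

@torch.no_grad()
def ensemble_target(critics, next_obs, next_act, reward, done, gamma=0.99, keep=5):
    """Compute a Bellman target from an ensemble of critics (keep must not exceed len(critics))."""
    qs = torch.stack([c(next_obs, next_act) for c in critics], dim=0)   # (N, B)
    lowest_k = qs.topk(keep, dim=0, largest=False).values.mean(dim=0)   # (B,)
    return reward + gamma * (1.0 - done) * lowest_k
```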